Federated learning algorithm for communication cost optimization
ZHENG Sai, LI Tianrui, HUANG Wei
Journal of Computer Applications    2023, 43 (1): 1-7.   DOI: 10.11772/j.issn.1001-9081.2021122054
Federated Learning (FL) is a machine learning setting that can protect data privacy; however, high communication cost and client heterogeneity hinder its large-scale deployment. To solve these two problems, a federated learning algorithm for communication cost optimization was proposed. First, the server received generative models from the clients and used them to generate simulated data. Then, the server trained a global model on the simulated data and sent it to the clients, and the clients obtained their final models by fine-tuning the global model. The proposed algorithm needs only one round of communication between the clients and the server, and the fine-tuning of the client models addresses client heterogeneity. With 20 clients, experiments were carried out on the MNIST and CIFAR-10 datasets. The results show that, while maintaining accuracy, the proposed algorithm reduces the amount of communication data to 1/10 of that of the Federated Averaging (FedAvg) algorithm on the MNIST dataset and to 1/100 of that of FedAvg on the CIFAR-10 dataset.
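The one-round protocol described above can be illustrated with a minimal sketch that is not taken from the paper: here the client "generative model" is a toy per-class Gaussian, the global model is a plain softmax classifier trained by gradient descent, and all function names and modelling choices are illustrative assumptions. Only one upload of generators and one download of the global weights is needed; heterogeneity is then absorbed by each client's local fine-tuning.

import numpy as np

def client_fit_generator(X, y, n_classes):
    # Toy per-class Gaussian as the client's generative model (illustrative assumption).
    return [(X[y == c].mean(axis=0), X[y == c].std(axis=0) + 1e-6) for c in range(n_classes)]

def server_generate(generators, n_per_class, n_features):
    # The server samples simulated data from every client's generative model.
    Xs, ys = [], []
    for gen in generators:
        for c, (mu, sigma) in enumerate(gen):
            Xs.append(np.random.normal(mu, sigma, size=(n_per_class, n_features)))
            ys.append(np.full(n_per_class, c))
    return np.vstack(Xs), np.concatenate(ys)

def train_softmax(X, y, n_classes, lr=0.1, epochs=100, W=None):
    # Multinomial logistic regression; reused by the server (global model)
    # and by each client (fine-tuning, passing the global W as the start point).
    W = np.zeros((X.shape[1], n_classes)) if W is None else W.copy()
    Y = np.eye(n_classes)[y]
    for _ in range(epochs):
        logits = X @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * X.T @ (p - Y) / len(X)
    return W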
GPU-based method for evaluating algebraic properties of cryptographic S-boxes
Jingwen CAI, Yongzhuang WEI, Zhenghong LIU
Journal of Computer Applications    2022, 42 (9): 2750-2756.   DOI: 10.11772/j.issn.1001-9081.2021081382
Cryptographic S-boxes (substitution boxes) are nonlinear components in symmetric encryption algorithms, and their algebraic properties usually determine the security performance of these algorithms. Differential uniformity, nonlinearity and revised transparency order are three basic indicators for evaluating the security properties of cryptographic S-boxes, describing the S-box's resistance against differential cryptanalysis, linear cryptanalysis and differential power attack respectively. When the input size of the S-box is large (for example, longer than 15 bits), the solving time on a Central Processing Unit (CPU) is prohibitively long, or the computation is even infeasible. How to evaluate the algebraic properties of large-size S-boxes quickly is currently a research hotspot in the field. Therefore, a method to evaluate the algebraic properties of cryptographic S-boxes quickly was proposed on the basis of the Graphics Processing Unit (GPU). In this method, the kernel functions were split into multiple threads by a slicing technique, and an optimization scheme combining the characteristics of solving differential uniformity, nonlinearity and revised transparency order was designed to realize parallel computing. Experimental results show that, compared with a CPU-based implementation, the single-GPU environment significantly improves implementation efficiency: the time spent on calculating differential uniformity, nonlinearity and revised transparency order is reduced by 90.28%, 80% and 66.67% respectively, which verifies the effectiveness of the method.
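As a point of reference for one of the three indicators, a short single-threaded sketch of differential uniformity is given below; the paper's GPU kernel splitting and slicing are not reproduced, and the example S-box is an arbitrary 3-bit permutation used only for illustration.

def differential_uniformity(sbox):
    # max over a != 0 and b of #{x : S(x ^ a) ^ S(x) = b}; lower is better.
    size = len(sbox)
    worst = 0
    for a in range(1, size):
        counts = [0] * size
        for x in range(size):
            counts[sbox[x ^ a] ^ sbox[x]] += 1
        worst = max(worst, max(counts))
    return worst

print(differential_uniformity([0, 1, 3, 6, 7, 4, 5, 2]))  # arbitrary 3-bit permutation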

Thick cloud removal algorithm for multi-temporal remote sensing images based on total variation model
WANG Rui, HUANG Wei, HU Nanqiang
Journal of Computer Applications    2020, 40 (7): 2126-2130.   DOI: 10.11772/j.issn.1001-9081.2019111902
Brightness inconsistency and obvious boundaries affect the reconstruction results of multi-temporal remote sensing images. To solve this problem, an improved thick cloud removal algorithm for multi-temporal remote sensing images was proposed by combining a total variation model with the Poisson equation. Firstly, a brightness correction coefficient was calculated from the brightness information of the common area of the multi-temporal images to correct their brightness, so as to reduce the effect of brightness differences on the cloud removal results. Then, the brightness-corrected multi-temporal images were reconstructed based on a selective multi-source total variation model, which improved the spatial smoothness of the fusion results and their similarity to the original images. Finally, local areas of the reconstructed image were optimized using the Poisson equation. The experimental results show that the method can effectively solve the problems of brightness inconsistency and obvious boundaries.
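The brightness correction step can be sketched as a simple gain computed over the cloud-free area common to the images; this gain-style coefficient is an assumption for illustration and may differ from the paper's exact formulation.

import numpy as np

def brightness_correct(target, reference, common_mask):
    # Scale the target image so that its mean brightness over the common
    # cloud-free area matches that of the reference image.
    gain = reference[common_mask].mean() / (target[common_mask].mean() + 1e-12)
    return target * gain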
Spatio-temporal hybrid prediction model for air quality
HUANG Weijian, LI Danyang, HUANG Yuan
Journal of Computer Applications    2020, 40 (11): 3385-3392.   DOI: 10.11772/j.issn.1001-9081.2020040471
Air quality in different regions of a city is correlated in both time and space, while traditional deep learning models have relatively simple structures and struggle to model both aspects simultaneously. Aiming at this problem, a Spatio-Temporal Air Quality Index (STAQI) model that can simultaneously extract the complex spatial and temporal relationships between air qualities was proposed for air quality prediction. The model was composed of a local component and a global component, which described the influences of local pollutant concentrations and of the air quality states at adjacent sites on the prediction for the target site, and the prediction result was obtained by weighted fusion of the component outputs. In the global component, a graph convolutional network was used to improve the input part of the gated recurrent unit network, so as to extract the spatial characteristics of the input data. Finally, the STAQI model was compared with various baseline and variant models. The Root Mean Square Error (RMSE) of the STAQI model is decreased by about 19% and 16% compared with those of the gated recurrent unit model and the global-component variant model, respectively. The results show that the STAQI model achieves the best prediction performance for any time window, and the prediction results at different target sites verify the strong generalization ability of the model.
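The global component's idea, using a graph convolution to build the GRU input from neighbouring stations, can be sketched in PyTorch as follows; the layer sizes, the single graph-convolution layer and the normalized adjacency a_hat are illustrative assumptions rather than the paper's exact architecture.

import torch
import torch.nn as nn

class GraphConvGRU(nn.Module):
    # One graph convolution (a_hat @ X @ W) feeding a GRU cell, so that the
    # recurrent input already mixes information from adjacent monitoring sites.
    def __init__(self, n_features, hidden_size):
        super().__init__()
        self.lin = nn.Linear(n_features, hidden_size)
        self.gru = nn.GRUCell(hidden_size, hidden_size)

    def forward(self, x_seq, a_hat, h=None):
        # x_seq: (time, n_nodes, n_features); a_hat: normalized adjacency (n_nodes, n_nodes)
        if h is None:
            h = x_seq.new_zeros(x_seq.shape[1], self.gru.hidden_size)
        for x_t in x_seq:
            h = self.gru(torch.relu(a_hat @ self.lin(x_t)), h)
        return h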
Multi-class vehicle detection in surveillance video based on deep learning
XU Zihao, HUANG Weiquan, WANG Yin
Journal of Computer Applications    2019, 39 (3): 700-705.   DOI: 10.11772/j.issn.1001-9081.2018071587
The performance of traditional machine learning methods for detecting vehicles in traffic surveillance video is affected by objective factors such as video quality, shooting angle and weather, which results in complex preprocessing, poor generalization and weak robustness. To address this, two deep learning models, an improved Faster R-CNN (Faster Regions with Convolutional Neural Network) and an improved SSD (Single Shot multibox Detector), combining dilated convolution, feature pyramid and focal loss, were proposed for vehicle detection. Firstly, a dataset was built from 851 labeled images captured from surveillance video at different times. Secondly, the improved and original models were trained under the same training strategy. Finally, the average accuracy of each model was calculated for evaluation. Experimental results show that, compared with the original Faster R-CNN and SSD, the average accuracies of the improved models are increased by 0.8 and 1.7 percentage points respectively. Both deep learning methods are more suitable than traditional methods for vehicle detection in complicated situations. The former has higher accuracy but lower speed and is more suitable for off-line video processing, while the latter has lower accuracy but higher speed and is more suitable for real-time video detection.
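Of the three ingredients named above, focal loss is the easiest to show in isolation; a minimal numpy sketch of the standard binary form follows, with alpha and gamma set to commonly used defaults (an assumption, not the paper's settings).

import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    # FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t); easy, well-classified
    # examples are down-weighted compared with plain cross-entropy.
    p = np.clip(p, eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))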
Design of augmented reality navigation simulation system for pelvic minimally invasive surgery based on stereoscopic vision
GAO Qinquan, HUANG Weiping, DU Min, WEI Mengyu, KE Dongzhong
Journal of Computer Applications    2018, 38 (9): 2660-2665.   DOI: 10.11772/j.issn.1001-9081.2018020335
Minimally invasive endoscopic surgery remains a challenge due to the complexity of the anatomical location and the limited endoscopic field of view. An Augmented Reality (AR) navigation system was designed for the simulation of pelvic minimally invasive surgery. Firstly, a 3D pelvis model segmented and reconstructed from preoperative CT (Computed Tomography) was texture-mapped with real pelvic surgical video, and a surgical video with ground-truth poses was thereby simulated. The blank model was initially registered to the intraoperative video by a 2D/3D registration based on the color consistency of visible surface points. After that, accurate tracking of the intraoperative endoscope was performed using a stereoscopic tracking algorithm. According to the multi-DOF (Degrees Of Freedom) transformation matrix of the endoscope, the preoperative 3D model could then be fused into the intraoperative view to achieve AR navigation. The experimental results show that the root mean square error of the estimated trajectory compared with the ground truth is 2.3933 mm, which demonstrates that the system can achieve a good AR display for visual navigation.
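The reported 2.3933 mm figure is a root mean square error between the estimated and ground-truth endoscope trajectories; a minimal sketch of that metric is shown below (how the positions are sampled is an assumption).

import numpy as np

def trajectory_rmse(estimated, ground_truth):
    # estimated, ground_truth: (N, 3) arrays of camera positions in millimetres.
    diff = np.asarray(estimated) - np.asarray(ground_truth)
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))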
High fidelity haze removal method for remote sensing images based on estimation of haze thickness map
WANG Yueyun, HUANG Wei, WANG Rui
Journal of Computer Applications    2018, 38 (12): 3596-3600.   DOI: 10.11772/j.issn.1001-9081.2018051149
Haze removal from remote sensing images easily results in ground object distortion. To solve this problem, an improved haze removal algorithm was proposed on the basis of the traditional additive haze pollution model, called high-fidelity haze removal based on estimation of the Haze Thickness Map (HTM). Firstly, the HTM was obtained by the traditional additive haze removal algorithm, and the mean value over the cloud-free areas was subtracted from the whole HTM so that the haze thickness of the cloud-free areas was close to zero. Then, the haze thickness of blue ground objects in the degraded images was estimated separately. Finally, the haze-free image was obtained by subtracting the finally optimized haze thickness map of each band from the degraded image. Experiments were carried out on multiple optical remote sensing images with different resolutions. The experimental results show that the proposed method can effectively solve the serious distortion of blue ground objects, improve the haze removal effect on degraded images, and preserve the data fidelity of cloud-free areas.
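Under the additive model used above, the final removal step amounts to zero-anchoring the haze thickness map on cloud-free areas and subtracting it band by band; a minimal sketch follows, where the names and the clipping choice are illustrative assumptions.

import numpy as np

def remove_haze_band(band, htm, clear_mask):
    # Anchor the HTM so cloud-free pixels have ~zero thickness, then subtract it
    # from the degraded band (additive haze pollution model).
    htm_adjusted = np.clip(htm - htm[clear_mask].mean(), 0.0, None)
    return band - htm_adjusted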
Improved automatic classification algorithm of software bug report in cloud environment
HUANG Wei, LIN Jie, JIANG Yu'e
Journal of Computer Applications    2016, 36 (5): 1212-1215.   DOI: 10.11772/j.issn.1001-9081.2016.05.1212
User-submitted bug reports are arbitrary and subjective, so the accuracy of their automatic classification is not ideal and considerable manual intervention is required. As bug report databases grow larger, improving the accuracy of automatic classification becomes urgent. A TF-IDF (Term Frequency-Inverse Document Frequency) based Naive Bayes (NB) algorithm was proposed, which considered not only the relationship of a term across different classes but also the relationship of a term within a class. It was also implemented in the distributed parallel environment of the MapReduce model on the Hadoop platform. The experimental results show that the proposed Naive Bayes algorithm improves the F1 measure to 71%, which is 27 percentage points higher than the state-of-the-art method, and that it can process massive amounts of data in a distributed way by adding computing nodes, offering shorter running time and better performance.
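A minimal single-machine sketch of a TF-IDF weighted Naive Bayes classifier using scikit-learn is shown below; the paper's refined term weighting and its MapReduce/Hadoop parallelization are not reproduced here.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def train_bug_report_classifier(reports, labels):
    # reports: list of bug-report texts; labels: their classes (e.g. "bug" / "enhancement").
    clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
    clf.fit(reports, labels)
    return clf

# clf.predict(new_reports) then yields predicted classes for unseen reports.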
Data-leakage detection scheme based on fingerprint and Bloom filters
HUANG Weiwen, LUO Jia
Journal of Computer Applications    2014, 34 (7): 1922-1928.   DOI: 10.11772/j.issn.1001-9081.2014.07.1922
Existing Data-Leakage Prevention (DLP) solutions are based on generic keyword searches in outgoing data and hence severely lack the ability to control data flow at fine granularity with a low false-positive rate. To address this, a DLP architecture based on whitelisting was first designed, using a whitelist to provide strong security for data transmission. On this basis, a data leakage detection algorithm combining document fingerprinting with Bloom filters was proposed. The algorithm computes the optimal fingerprint locations by dynamic programming to minimize memory overhead and enable a high-speed implementation. The simulation results show that the proposed algorithm checks the fingerprints of a large number of documents at very low cost. For example, for 1 TB of documents, the proposed solution requires only 340 MB of memory to achieve a worst-case expected detection lag (i.e., leakage length) of 1000 bytes.
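The detection idea can be sketched in a few lines of Python: fingerprints of the protected documents (here plain overlapping k-gram hashes, an illustrative assumption) are inserted into a Bloom filter, and outgoing traffic is flagged when enough of its fingerprints hit the filter; the paper's dynamic-programming placement of fingerprints is not reproduced.

import hashlib

class BloomFilter:
    def __init__(self, n_bits, n_hashes):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8 + 1)

    def _positions(self, item):
        for i in range(self.n_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.n_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all((self.bits[pos // 8] >> (pos % 8)) & 1 for pos in self._positions(item))

def fingerprints(text, k=32):
    # Overlapping k-grams of the protected document; outgoing data is flagged
    # when many of its k-grams are found in the filter.
    return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}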

Implementation of Laplace transform on heterogeneous multi-core engineering and scientific computation accelerator coprocessor
HE Zhangqing, HUANG Wei, DAI Kui, ZHENG Zhaoxia
Journal of Computer Applications    2014, 34 (2): 369-372.  
Engineering and Scientific Computation Accelerator (ESCA) is a heterogeneous multi-core architecture designed to accelerate computation-intensive parallel computing in scientific and engineering applications. An implementation of the Laplace transform on a hybrid system based on the ESCA coprocessor was described, and the performance of the Laplace transform on a quad-core ESCA prototype was evaluated. The experimental results show that ESCA can accelerate compute-intensive applications effectively.
Texture-preserving shadow removal algorithm based on gradient domain
HUANG Wei, FU Liqin, WANG Chen
Journal of Computer Applications    2013, 33 (08): 2317-2319.  
Accurate shadow boundary detection and texture preservation are two critical difficulties in shadow removal. To solve these problems, a new shadow removal method based on the gradient domain was proposed. Firstly, the shadow boundary was detected approximately. Then, the gradients in the shadow interior and on the shadow boundary were modified respectively to obtain a shadow-free gradient field. Based on this gradient field, the information in the shadow regions was recovered by solving the Poisson equation. Experimental results on several images indicate that the method can remove shadows easily while preserving the textures in the shadow regions, and that it is not sensitive to the accuracy of the shadow boundary.
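The final reconstruction step, recovering the image from the modified gradient field via the Poisson equation, can be sketched with a short Jacobi iteration; the gradient-modification rules and the solver actually used in the paper are not reproduced, and boundary pixels are simply kept fixed here.

import numpy as np

def poisson_reconstruct(gx, gy, init, n_iter=500):
    # Solve laplacian(I) = div(gx, gy) by Jacobi iteration; border pixels stay
    # fixed to the initial image, interior pixels follow the modified gradients.
    img = init.astype(float).copy()
    div = np.zeros_like(img)
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]   # backward difference of gx along x
    div[1:, :] += gy[1:, :] - gy[:-1, :]   # backward difference of gy along y
    for _ in range(n_iter):
        img[1:-1, 1:-1] = 0.25 * (img[1:-1, :-2] + img[1:-1, 2:] +
                                  img[:-2, 1:-1] + img[2:, 1:-1] - div[1:-1, 1:-1])
    return img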
Recommendation research based on general content probabilistic latent semantic analysis model
ZHANG Wei, HUANG Wei, XIA Limin
Journal of Computer Applications    2013, 33 (05): 1330-1333.   DOI: 10.3724/SP.J.1087.2013.01330
Recommendation systems handle new items poorly, and their accuracy is difficult to guarantee. Therefore, a new recommendation method based on a general content Probabilistic Latent Semantic Analysis (PLSA) model was proposed. The general content PLSA model contained two latent variables indicating user groups and item groups, incorporated item content features, and was trained by an asymmetric learning algorithm. The experimental results show that the new method achieves good recommendation quality on three different data sets.
Multi-objective particle swarm optimization algorithm of multi-swarm co-evolution
PENG Hu, HUANG Wei, DENG Chang-shou
Journal of Computer Applications    2012, 32 (02): 456-460.   DOI: 10.3724/SP.J.1087.2012.00456
Particle Swarm Optimization (PSO) is a very competitive swarm intelligence algorithm for multi-objective optimization problems. However, it easily falls into local optima, and the convergence and accuracy of the obtained Pareto set are often unsatisfactory. Therefore, a multi-objective particle swarm optimization algorithm of multi-swarm co-evolution based on decomposition (MOPSO_MC) was proposed. In the proposed algorithm, each sub-swarm corresponds to a sub-problem obtained by multi-objective decomposition, and a new velocity update strategy was constructed: each particle follows its own previous best position, the sub-swarm best position and the best position in the neighbouring sub-swarm, which enhances the local search ability and allows evolutionary information to be obtained from neighbouring sub-swarms. Finally, the simulation results verify the convergence of the proposed algorithm as well as the uniformity and correctness of the solution distribution, in comparison with state-of-the-art multi-objective particle swarm algorithms on the ZDT test functions.
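The new velocity update can be written down directly: each particle is pulled towards its personal best, its sub-swarm best and the best position of the neighbouring sub-swarm. A minimal numpy sketch follows; the inertia weight and acceleration coefficients are generic defaults, not the paper's settings.

import numpy as np

def velocity_update(v, x, pbest, sbest, nbest, w=0.4, c1=1.5, c2=1.5, c3=1.5):
    # Three attractors: personal best, own sub-swarm best, neighbouring sub-swarm best.
    r1, r2, r3 = np.random.rand(3)
    return (w * v + c1 * r1 * (pbest - x)
                  + c2 * r2 * (sbest - x)
                  + c3 * r3 * (nbest - x))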
Automatic focusing algorithm based on improved gray contrast function
HUANG Wei-qiong, YOU Lin-ru, LIU Shao-jun
Journal of Computer Applications    2011, 31 (11): 3008-3009.   DOI: 10.3724/SP.J.1087.2011.03008
To meet the speed and accuracy requirements of image measurement in the auto-focusing system of an image measuring instrument, the gray contrast function was improved as follows: exploiting the fact that a focused image has a narrower range of gray-scale transitions than a defocused image, the average change in gray value was calculated, and the number of gray-scale changes was used to reach focus. The comparison shows that the improved gray contrast function has lower computational complexity and higher focusing sensitivity. With good stability, the improved function achieves faster focusing and higher focusing accuracy than other methods in the auto-focusing system of the image measuring instrument.
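A minimal numpy sketch in the spirit of the improved criterion: the average gray-level change is computed first, and the number of transitions exceeding it serves as the focus score, so that sharper frames score higher; this exact thresholding rule is an illustrative assumption rather than the paper's definition.

import numpy as np

def focus_measure(gray):
    # Count horizontal and vertical gray-level changes larger than the average
    # change; a focused image has steeper transitions and thus a higher count.
    g = gray.astype(float)
    dx = np.abs(np.diff(g, axis=1))
    dy = np.abs(np.diff(g, axis=0))
    threshold = (dx.mean() + dy.mean()) / 2.0
    return int((dx > threshold).sum() + (dy > threshold).sum())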
Method of Boolean operation based on 3D grid model
CHEN Xuegong, YANG Lan, HUANG Wei, JI Xing
Journal of Computer Applications    2011, 31 (06): 1543-1545.   DOI: 10.3724/SP.J.1087.2011.01543
A Boolean operation method based on a three-dimensional grid model was proposed. Firstly, the intersecting triangles were obtained through a collision detection algorithm based on a hierarchical bounding box tree of Oriented Bounding Boxes (OBB); intersection tests on these triangles yielded the intersection lines, and the topological relations between the intersection lines and the triangles were established. Secondly, the intersecting triangles were divided into regions by processing the three types of intersection lines, producing a series of polygons, and Delaunay triangulation was carried out on the polygons to obtain the resulting regions. Lastly, a relation adjacency list was constructed based on the containment relations between solids, the internal and external relations of the polygons with respect to other entities were judged, and the triangles were located according to the topological relations of the mesh model; the final result was then obtained according to the required Boolean operation (intersection, union or difference). Experimental results show that the lithology of the intersecting parts is consistent with the entities, which verifies the correctness and feasibility of the algorithm.